Hi! My name is Faranak Halali and I am a PhD student in Clinical Nutrition. I had never worked with R until two weeks ago, when I started to learn R basics on DataCamp. I am really excited about this course. I heard about the course from my supervisor and I thank her for that. Here is my GitHub repository link: link
This data has 183 observations of 60 variables. The variables include several questions on different topics along with Age, a combined Attitude score, gender and total points. Today, I read the data from the URL, combined the questions related to deep, surface (surf) and strategic (stra) learning, and created a new column for each. Then I made a new dataset including only 7 variables: Age, gender, Attitude, Points, deep, surf, stra. I scaled the combined variables back to the original scale by dividing each by the number of its questions. I excluded observations where Points was 0 (keeping Points > 0), so Newdata now has 166 observations of 7 variables. I saved the data in csv format using the write.csv function. I then read Newdata in again and explored the dimensions and structure of the data to check whether everything was done right.
Now, let’s do the analysis! First, reading the data (read.table function) and exploring the dimensions (dim function) and structure (str function). It looks right, as the Newdata I had made in the data wrangling process should have 166 observations of 7 variables. These 7 variables are: Age, gender, Attitude, Points, deep, surf, stra. Age, Attitude and Points are interval variables, gender is a two-level nominal variable, and deep, surf and stra are numerical variables. I used the pairs(Newdata) function to draw a plot matrix of the relationships between all the variables; each variable is plotted against the other six. From what I see, most of the variable pairs have positive relationships, except for deep & surf, which shows a negative relationship. The strongest positive relationships seem to be between Attitude & Points and deep & stra, because the plot dots are the most tightly gathered around the imaginary line.
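To put rough numbers on these visual impressions, one could also compute the correlation matrix of the continuous variables; this is only a minimal sketch, assuming Newdata is read in as in the code further down.
round(cor(Newdata[, c("Attitude", "Points", "deep", "surf", "stra")]), 2)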
Now, variable summaries! I used the summary(Newdata$variable) function to explore the variable summaries. This function shows the distribution of categorical variables and the minimum, maximum, mean, median (2nd quartile) and 1st and 3rd quartiles of the continuous variables. For example, Age has a minimum value of 17 and a mean of 25.5. The distribution of gender is 110 females and 56 males, and so on.
Now, multiple regression analysis! I chose three independent variables to see how they regress against the dependent variable Points. Based on the scatterplot matrix, I chose Attitude, deep and stra as independent variables. I used the function lm(Points ~ Attitude + stra + deep, data = Newdata) to fit the regression model and then the summary() function to study its summary. Attitude had a significant relationship with Points (p < 0.001), but the other two independent variables did not show significant relationships. This model had a multiple R-squared of 0.21, which means about 21% of the variation in the dependent variable Points is explained by the three independent variables. For the next regression model I removed the two non-significant independent variables. According to the model summary, this new regression model had a multiple R-squared of 0.19, which means about 19% of the variation in Points is explained by the independent variable Attitude. These models highlight the important explanatory role of Attitude in Points.
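A formal way to compare the two nested models would be an F-test with anova(); this is only a sketch, assuming my_regressionmodel and my_regressionmodel2 are fitted as in the code further down.
# F-test: does adding stra and deep improve the Attitude-only model?
anova(my_regressionmodel2, my_regressionmodel)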
Now, model diagnostics! I used plot(my_regressionmodel2, which = c(1,2,5)) to draw three diagnostic plots. The first plot, Residuals vs Fitted, checks the linearity and constant-variance assumptions; the residuals show no clear pattern, which supports the linearity assumption. The second plot, the QQ plot, checks whether the errors are normally distributed; the points follow the reference line reasonably well, which supports the normality assumption. The last plot, Residuals vs Leverage, shows the leverage of the observations, and no single observation stands out as an influential outlier.
Read data
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
Faran <- read.table("http://www.helsinki.fi/~kvehkala/JYTmooc/JYTOPKYS3-data.txt", sep="\t", header=TRUE)
Exploring data
str(Faran)
## 'data.frame': 183 obs. of 60 variables:
## $ Aa : int 3 2 4 4 3 4 4 3 2 3 ...
## $ Ab : int 1 2 1 2 2 2 1 1 1 2 ...
## $ Ac : int 2 2 1 3 2 1 2 2 2 1 ...
## $ Ad : int 1 2 1 2 1 1 2 1 1 1 ...
## $ Ae : int 1 1 1 1 2 1 1 1 1 1 ...
## $ Af : int 1 1 1 1 1 1 1 1 1 2 ...
## $ ST01 : int 4 4 3 3 4 4 5 4 4 4 ...
## $ SU02 : int 2 2 1 3 2 3 2 2 1 2 ...
## $ D03 : int 4 4 4 4 5 5 4 4 5 4 ...
## $ ST04 : int 4 4 4 4 3 4 2 5 5 4 ...
## $ SU05 : int 2 4 2 3 4 3 2 4 2 4 ...
## $ D06 : int 4 2 3 4 4 5 3 3 4 4 ...
## $ D07 : int 4 3 4 4 4 5 4 4 5 4 ...
## $ SU08 : int 3 4 1 2 3 4 4 2 4 2 ...
## $ ST09 : int 3 4 3 3 4 4 2 4 4 4 ...
## $ SU10 : int 2 1 1 1 2 1 1 2 1 2 ...
## $ D11 : int 3 4 4 3 4 5 5 3 4 4 ...
## $ ST12 : int 3 1 4 3 2 3 2 4 4 4 ...
## $ SU13 : int 3 3 2 2 3 1 1 2 1 2 ...
## $ D14 : int 4 2 4 4 4 5 5 4 4 4 ...
## $ D15 : int 3 3 2 3 3 4 2 2 3 4 ...
## $ SU16 : int 2 4 3 2 3 2 3 3 4 4 ...
## $ ST17 : int 3 4 3 3 4 3 4 3 4 4 ...
## $ SU18 : int 2 2 1 1 1 2 1 2 1 2 ...
## $ D19 : int 4 3 4 3 4 4 4 4 5 4 ...
## $ ST20 : int 2 1 3 3 3 3 1 4 4 2 ...
## $ SU21 : int 3 2 2 3 2 4 1 3 2 4 ...
## $ D22 : int 3 2 4 3 3 5 4 2 4 4 ...
## $ D23 : int 2 3 3 3 3 4 3 2 4 4 ...
## $ SU24 : int 2 4 3 2 4 2 2 4 2 4 ...
## $ ST25 : int 4 2 4 3 4 4 1 4 4 4 ...
## $ SU26 : int 4 4 4 2 3 2 1 4 4 4 ...
## $ D27 : int 4 2 3 3 3 5 4 4 5 4 ...
## $ ST28 : int 4 2 5 3 5 4 1 4 5 2 ...
## $ SU29 : int 3 3 2 3 3 2 1 2 1 2 ...
## $ D30 : int 4 3 4 4 3 5 4 3 4 4 ...
## $ D31 : int 4 4 3 4 4 5 4 4 5 4 ...
## $ SU32 : int 3 5 5 3 4 3 4 4 3 4 ...
## $ Ca : int 2 4 3 3 2 3 4 2 3 2 ...
## $ Cb : int 4 4 5 4 4 5 5 4 5 4 ...
## $ Cc : int 3 4 4 4 4 4 4 4 4 4 ...
## $ Cd : int 4 5 4 4 3 4 4 5 5 5 ...
## $ Ce : int 3 5 3 3 3 3 4 3 3 4 ...
## $ Cf : int 2 3 4 4 3 4 5 3 3 4 ...
## $ Cg : int 3 2 4 4 4 5 5 3 5 4 ...
## $ Ch : int 4 4 2 3 4 4 3 3 5 4 ...
## $ Da : int 3 4 1 2 3 3 2 2 4 1 ...
## $ Db : int 4 3 4 4 4 5 4 4 2 4 ...
## $ Dc : int 4 3 4 5 4 4 4 4 4 4 ...
## $ Dd : int 5 4 1 2 4 4 5 3 5 2 ...
## $ De : int 4 3 4 5 4 4 5 4 4 2 ...
## $ Df : int 2 2 1 1 2 3 1 1 4 1 ...
## $ Dg : int 4 3 3 5 5 4 4 4 5 1 ...
## $ Dh : int 3 3 1 4 5 3 4 1 4 1 ...
## $ Di : int 4 2 1 2 3 3 2 1 4 1 ...
## $ Dj : int 4 4 5 5 3 5 4 5 2 4 ...
## $ Age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ Attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ Points : int 25 12 24 10 22 21 21 31 24 26 ...
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
dim(Faran)
## [1] 183 60
Deep, surf, stra questions
deep_questions <- c("D03", "D11", "D19", "D27", "D07", "D14", "D22", "D30","D06", "D15", "D23", "D31")
surface_questions <- c("SU02","SU10","SU18","SU26", "SU05","SU13","SU21","SU29","SU08","SU16","SU24","SU32")
strategic_questions <- c("ST01","ST09","ST17","ST25","ST04","ST12","ST20","ST28")
deep_columns <- select(Faran,one_of(deep_questions))
surf_columns <- select(Faran, one_of(surface_questions))
stra_columns <- select(Faran, one_of(strategic_questions))
Keeping selected data
Faran$deep <- rowMeans(deep_columns)
Faran$surf <- rowMeans(surf_columns)
Faran$stra <- rowMeans(stra_columns)
keep <- c("Age", "gender", "Attitude", "Points", "deep", "surf", "stra")
Newdata <- select(Faran, one_of(keep))
Scaling
deep_scaled <- Faran$deep/12
surf_scaled <- Faran$surf/12
stra_scaled <- Faran$stra/8
# note: rowMeans() above already divides by the number of questions, so these scaled copies are not used in Newdata
Newdata <- filter(Newdata, Points > 0)
Saving data
write.csv(Newdata, file = "Newdata.csv")
Exploring data
dim(Newdata)
## [1] 166 7
str(Newdata)
## 'data.frame': 166 obs. of 7 variables:
## $ Age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ Attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ Points : int 25 12 24 10 22 21 21 31 24 26 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
Plot matrix
pairs(Newdata)
Regression model and its summary
my_regressionmodel <- lm(Points ~ Attitude + stra + deep, data = Newdata)
summary(my_regressionmodel)
##
## Call:
## lm(formula = Points ~ Attitude + stra + deep, data = Newdata)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.5239 -3.4276 0.5474 3.8220 11.5112
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.39145 3.40775 3.343 0.00103 **
## Attitude 0.35254 0.05683 6.203 4.44e-09 ***
## stra 0.96208 0.53668 1.793 0.07489 .
## deep -0.74920 0.75066 -0.998 0.31974
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 162 degrees of freedom
## Multiple R-squared: 0.2097, Adjusted R-squared: 0.195
## F-statistic: 14.33 on 3 and 162 DF, p-value: 2.521e-08
Regression model 2 excluding nonsignificant explanators
my_regressionmodel2 <- lm(Points ~ Attitude, data = Newdata)
summary(my_regressionmodel2)
##
## Call:
## lm(formula = Points ~ Attitude, data = Newdata)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.63715 1.83035 6.358 1.95e-09 ***
## Attitude 0.35255 0.05674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
Summary of variables
summary(Newdata$deep)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.583 3.333 3.667 3.680 4.083 4.917
summary(Newdata$surf)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.583 2.417 2.833 2.787 3.167 4.333
summary(Newdata$stra)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.250 2.625 3.188 3.121 3.625 5.000
summary(Newdata$Age)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 17.00 21.00 22.00 25.51 27.00 55.00
summary(Newdata$gender)
## F M
## 110 56
summary(Newdata$Attitude)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 14.00 26.00 32.00 31.43 37.00 50.00
summary(Newdata$Points)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 7.00 19.00 23.00 22.72 27.75 33.00
Regression diagnostic plots
plot(my_regressionmodel2, which = c(1,2,5))
Author: Faranak Halali
This week I used data on students’ performance in a math class and a Portuguese language class. I started with data wrangling and then continued with the data analysis. This week’s method is logistic regression, which is used to study the odds of success in a binary dependent variable based on explanatory variables.
Reading data and exploring columns
alc <- read.csv("~/IODS-project/Data/alc.csv", header=TRUE, sep=";")
colnames(alc)
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "nursery" "internet" "guardian" "traveltime"
## [16] "studytime" "failures" "schoolsup" "famsup" "paid"
## [21] "activities" "higher" "romantic" "famrel" "freetime"
## [26] "goout" "Dalc" "Walc" "health" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
Data explanation: This data deals with students’ (15-22 years of age) performance in two distinct subjects, Mathematics (mat) and Portuguese language (por), in two different schools. Students answered questions on their demographic information, e.g., school name, age, sex, family size, and travel time between home and school, and their grades in Mathematics and Portuguese were recorded for three periods (G1, G2, G3). One important outcome of this study is the amount of alcohol use among these students and how it relates to their school performance. Weekday and weekend alcohol consumption were measured on a 5-point scale (1 = very low to 5 = very high).
Choose 4 variables for potential hypotheses regarding their relationships with alcohol use
My variables and hypotheses are: 1) sex: boys are more likely to use more alcohol. 2) absences from school: those with more absences from school are more likely to use more alcohol. 3) going out: those who go out more often are more likely to use more alcohol. 4) family relationships: those who report high-quality relationships with other family members are less likely to use more alcohol.
Explore chosen variables
library(dplyr); library(ggplot2)
alc %>% group_by (sex, high_use) %>% summarise(count = n())
## # A tibble: 4 x 3
## # Groups: sex [2]
## sex high_use count
## <fct> <lgl> <int>
## 1 F FALSE 156
## 2 F TRUE 42
## 3 M FALSE 112
## 4 M TRUE 72
Interpretation: It seems that a higher proportion of men report high alcohol use (72/184 = 39%) compared to women (42/198 = 21%). This does not say whether the difference is significant or not. My first hypothesis is supported by this data exploration.
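The within-sex proportions can also be computed directly with dplyr (a minimal sketch, reusing the alc data loaded above):
alc %>% group_by(sex) %>% summarise(n = n(), prop_high = mean(high_use))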
library(ggplot2)
g1 <- ggplot(alc, aes(x = high_use, y = absences))
g2 <- g1 + geom_boxplot() + ggtitle("Student absences by alcohol consumption")
g2
Interpretation: The boxplot shows that high alcohol use is more common among those with more absences from school (the median number of absences is higher in the high-use group). It does not say whether this difference is significant or not. My second hypothesis is supported by this data exploration.
g2 <- ggplot(alc, aes(x = high_use, y = goout))
g3 <- g2 + geom_boxplot() + ggtitle("Student goouts by alcohol consumption")
g3
Interpretation: Those who go out more frequently are more likely to be high alcohol users. It does not say whether this difference is significant or not. My third hypothesis is supported by this data exploration.
g4 <- ggplot(alc, aes(x = high_use, y = famrel))
g5 <- g4 + geom_boxplot(aes(fill = famrel)) + ggtitle("Student family relations by alcohol consumption")
g5
Interpretation: This boxplot shows that those who report higher-quality family relationships are less likely to be high alcohol users. It does not say whether this difference is significant or not. My fourth hypothesis is supported by this data exploration.
Logistic regression model
m <- glm(high_use ~ sex + absences + goout + famrel, data = alc, family = "binomial")
summary(m)
##
## Call:
## glm(formula = high_use ~ sex + absences + goout + famrel, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7151 -0.7820 -0.5137 0.7537 2.5463
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.76826 0.66170 -4.184 2.87e-05 ***
## sexM 1.01234 0.25895 3.909 9.25e-05 ***
## absences 0.08168 0.02200 3.713 0.000205 ***
## goout 0.76761 0.12316 6.232 4.59e-10 ***
## famrel -0.39378 0.14035 -2.806 0.005020 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 379.81 on 377 degrees of freedom
## AIC: 389.81
##
## Number of Fisher Scoring iterations: 4
coef(m)
## (Intercept) sexM absences goout famrel
## -2.76826342 1.01234164 0.08167686 0.76761101 -0.39378406
OR <- coef(m) %>% exp
confint(m)
## Waiting for profiling to be done...
## 2.5 % 97.5 %
## (Intercept) -4.10536151 -1.5026171
## sexM 0.51143363 1.5288392
## absences 0.03934417 0.1269797
## goout 0.53266069 1.0166758
## famrel -0.67216169 -0.1200006
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.06277092 0.01648406 0.2225470
## sexM 2.75203774 1.66768032 4.6128191
## absences 1.08510512 1.04012840 1.1353940
## goout 2.15461275 1.70345866 2.7639914
## famrel 0.67449969 0.51060362 0.8869199
Interpretation: The fitted logistic regression model indicates that all the chosen variables, i.e., sex, absences, going out and family relationships, are significant explanators of high alcohol use among the students. All have p-values < 0.001 except family relations, which has a p-value < 0.01. According to the odds ratios (ORs): 1) Male students have 2.75 times the odds of high alcohol consumption compared to female students (OR > 1, p < 0.001), supporting my hypothesis 1; the 95% CI for this OR is 1.67 to 4.61. 2) For a one-unit increase in the number of absences, the odds of high alcohol use are multiplied by about 1.08 (OR > 1, p < 0.001), supporting my hypothesis 2; the 95% CI, 1.04 to 1.14, is a narrow interval. 3) For a one-unit increase in going out, the odds of high alcohol use are multiplied by 2.15 (OR > 1, p < 0.001), supporting my hypothesis 3; the 95% CI is 1.70 to 2.76. 4) For a one-unit increase in the quality of family relationships, the odds of high alcohol use are multiplied by 0.67, i.e., they decrease (OR < 1, p < 0.01), supporting my hypothesis 4; the 95% CI is 0.51 to 0.89.
Predictive power of the model
probabilities <- predict(m, type = "response")
alc <- mutate(alc, probability = probabilities)
alc <- mutate(alc, prediction = probability > 0.5)
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 254 14
## TRUE 64 50
Interpretation: Comparing the predicted and actual values: of the 318 observations predicted FALSE (low alcohol use), 254 were actually low users (about 80%), and of the 64 observations predicted TRUE (high alcohol use), 50 were actually high users (78%). Overall, the fitted logistic regression model classifies about 80% of the observations correctly, which indicates reasonable predictive power.
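The overall training accuracy implied by this table can be computed directly (a minimal sketch, using the prediction column created above):
# share of observations classified correctly
mean(alc$high_use == alc$prediction)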
Visualizing predictive power of the model
g <- ggplot(alc, aes(x = probability, y = high_use, col = prediction))
g1 <- g + geom_point()
g1
Interpretation: Also according to this plot, the classification success seems good but not perfect, because the model also makes some wrong classifications.
Bonus question: cross-validation
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]
## [1] 0.2198953
Interpretation: Cross-validation is used to test the accuracy of the model’s predictions on independent, unseen data. The data are split into two sets: training and test. The training set is used to fit the model, and the test set provides an unbiased evaluation of the final model fitted on the training set. We can perform multiple rounds of cross-validation by splitting the dataset into K equal partitions, each time using one part as the test data and the remaining parts as the training data; this repeats until every part has served as the test set, and the prediction errors are then averaged. This reduces variability because all observations are used for both training and testing. In the 10-fold cross-validation, the average prediction error is about 0.22, i.e., on average the model predicts wrongly about 22 times out of 100; in other words, its prediction accuracy on unseen data is about 78%, so the model seems to have good predictive power. This is a lower prediction error than the one calculated in the DataCamp exercise, which was 0.26.
Super bonus question
m1 <- glm(high_use ~ sex + absences + goout + famrel + failures + Medu + Fedu + freetime + romantic + guardian, data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m1, K = 10)
cv$delta[1]
## [1] 0.2251309
Interpretation: In this model with 10 predictors, 10-fold cross-validation gives an average prediction error of 0.225. Let’s continue with other models!
m2 <- glm(high_use ~ sex + absences + goout + famrel + failures + Medu + Fedu + freetime , data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m2, K = 10)
cv$delta[1]
## [1] 0.2251309
Interpretation: In this model with 8 predictors, 10-fold cross-validation gives an average prediction error of 0.225.
m3 <- glm(high_use ~ sex + absences + goout + famrel + failures + Medu + Fedu , data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m3, K = 10)
cv$delta[1]
## [1] 0.2120419
Interpretation: In this model with 7 predictors, 10-fold cross-validation gives an average prediction error of 0.212, slightly lower than for m2.
m4 <- glm(high_use ~ sex + absences + goout + famrel + failures + Medu , data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m4, K = 10)
cv$delta[1]
## [1] 0.2225131
Interpretation: In this model with 6 predictors, 10-fold cross-validation gives an average prediction error of 0.223.
m5 <- glm(high_use ~ sex + absences + goout + famrel + failures , data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m5, K = 10)
cv$delta[1]
## [1] 0.2068063
Interpretation: In this model with 5 predictors, 10-fold cross-validation gives an average prediction error of 0.207.
m6 <- glm(high_use ~ sex + absences + goout + famrel , data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m6, K = 10)
cv$delta[1]
## [1] 0.2146597
Interpretation: In this model with 4 predictors, 10-fold cross-validation gives an average prediction error of 0.215. These 4 predictors are the same ones entered into the original regression model for this week’s exercise, whose cross-validated error above was 0.220; the small difference comes from the random assignment of observations to folds and is not a crucial problem.
m7 <- glm(high_use ~ sex + absences + goout , data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m7, K = 10)
cv$delta[1]
## [1] 0.2041885
Interpretation: In this model with 3 predictors, 10-fold cross-validation gives an average prediction error of 0.204.
m8 <- glm(high_use ~ sex + absences , data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m8, K = 10)
cv$delta[1]
## [1] 0.2591623
Interpretation: In this model with 2 predictors, 10-fold cross-validation gives an average prediction error of 0.259.
m9 <- glm(high_use ~ sex , data = alc, family = "binomial")
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2041885
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m9, K = 10)
cv$delta[1]
## [1] 0.2984293
Interpretation: Finally, in this model with only one predictor, 10-fold cross-validation gives an average prediction error of 0.298.
Now let’s compare the above 9 models!
library(pROC)
## Type 'citation("pROC")' for a citation.
##
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
##
## cov, smooth, var
It seems I was only hoping to compare the above 9 models! Well, my hopes did not come true and I could not figure out how to draw the ROC curve. By the way, was the idea of this task similar to what I have done with fitting different regression models, or is drawing the ROC curve the right way to evaluate these models? I hope at least I got the right idea! However, based on the average prediction errors calculated for each of the models, it seems that including at least 3 predictors helps to keep the prediction error low; models with fewer than 3 predictors (2 or 1) show clearly higher prediction errors than the models with at least 3 predictors.
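For completeness, one possible way to draw the ROC curve with pROC would be something like the following sketch, using the predicted probabilities of the original model m (the name roc_m is mine, not from the exercise):
# build the ROC object from the observed classes and predicted probabilities
roc_m <- roc(response = as.numeric(alc$high_use), predictor = alc$probability)
plot(roc_m)  # ROC curve of model m
auc(roc_m)   # area under the curve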
Author: Faranak Halali, 24.11.2019
This week’s task deals with clustering and classification methods. The Boston dataset will be used.
Loading and exploring data
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
data("Boston")
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
Interpretation: The Boston dataset contains 506 rows (observations) and 14 variables (columns). It describes housing values in the suburbs of Boston, USA. All the variables are numeric. Example variables: crim = per capita crime rate, indus = proportion of non-retail business acres per town, nox = nitrogen oxides concentration (parts per 10 million).
Graphical overview and summary of the data
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
pairs(Boston)
Interpretation: The summary of the variables shows the minimum and maximum values of each variable plus the quartiles (Q1, Q2 = median, Q3) and the mean. For example, for the variable crim the mean per capita crime rate is 3.61. The plot matrix shows the relationships between pairs of variables; for example, there is a weak positive relationship between nox and age.
Standardize the dataset
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
class(boston_scaled)
## [1] "matrix"
boston_scaled <- as.data.frame(boston_scaled)
Interpretation: Here we standardized (z-scored) the dataset because the variables are measured on different scales and in different units; scale() subtracts the column mean from each value and divides by the column standard deviation. Looking at the summary of the standardized data, all the variables now have a mean of zero.
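What scale() does can be checked by hand for a single column (a minimal sketch; crim_manual is just an illustrative name):
# z-score of crim computed manually: (value - column mean) / column sd
crim_manual <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
all.equal(crim_manual, boston_scaled$crim)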
Create a categorical variable of the crime rate
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
Interpretation: Based on its quantiles, the original continuous crim variable was converted into a 4-level categorical variable called crime. The categories of this new variable are: low, med_low, med_high, high.
Drop the old crime variable and add the new categorical crime variable
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
Divide the dataset to train and test sets
n <- nrow(boston_scaled)
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]
Save the correct classes from test data
correct_classes <- test$crime
Remove the crime variable from test data
test <- dplyr::select(test, -crime)
Linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2500000 0.2524752 0.2524752 0.2450495
##
## Group means:
## zn indus chas nox rm
## low 0.8863140 -0.8679296 -0.11640431 -0.8767697 0.4224225
## med_low -0.1133263 -0.2917037 0.03646311 -0.5598361 -0.1489488
## med_high -0.3930786 0.2102141 0.26805724 0.3670287 0.1169059
## high -0.4872402 1.0171737 -0.03371693 1.0943436 -0.4898540
## age dis rad tax ptratio
## low -0.8223131 0.8569202 -0.6896375 -0.7610600 -0.49167264
## med_low -0.3011092 0.3256777 -0.5438774 -0.4628171 -0.08135162
## med_high 0.3745006 -0.3602606 -0.3772371 -0.2806265 -0.25343402
## high 0.8482266 -0.8795925 1.6375616 1.5136504 0.78011702
## black lstat medv
## low 0.3730609 -0.74508349 0.51596801
## med_low 0.3201744 -0.12658420 -0.01550599
## med_high 0.1008496 -0.02107729 0.19172036
## high -0.7079159 0.94046615 -0.70064635
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.10776416 0.72654110 -0.93408902
## indus -0.02583666 -0.26567443 0.10257380
## chas -0.08579500 -0.06471350 0.14059174
## nox 0.48820657 -0.80707914 -1.35134468
## rm -0.08356565 -0.11719292 -0.19751220
## age 0.29387471 -0.23408276 -0.07717808
## dis -0.03964767 -0.27916244 0.05346401
## rad 3.00329349 0.90079921 -0.18048856
## tax -0.04940688 0.07554163 0.73109985
## ptratio 0.12297691 -0.03730643 -0.24347046
## black -0.13613209 0.02462864 0.13611246
## lstat 0.23774413 -0.22293565 0.28072567
## medv 0.17401867 -0.39576588 -0.27914010
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9495 0.0370 0.0134
Interpretation: Out of the 404 observations in the training set (80% of 506), the prior probabilities imply roughly 101 in the low crime category, 102 in med_low, 102 in med_high and 99 in high. Next, we see the means of the scaled variables in each of the 4 crime categories. The coefficients of linear discriminants give the linear combinations of the predictor variables that form the LDA decision rule; for example, LD1 ≈ 0.108·zn − 0.026·indus − 0.086·chas + … . The proportion of trace is the share of between-class separation achieved by each discriminant function; here LD1 achieves by far the highest separation (about 95%).
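The group sizes behind these prior probabilities can be checked directly from the training set (a minimal sketch):
table(train$crime)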
Plot the LDA results
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
Interpretation: Based on the biplot, I can say that the combination of the ‘rad’, ‘zn’ and ‘nox’ variables is the most influential separator of the crime categories.
Predict LDA
lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 17 9 0 0
## med_low 5 13 6 0
## med_high 0 9 15 0
## high 0 0 0 28
Interpretation: LDA estimates the probability that a new observation belongs to each class, and the predicted class is the one with the highest probability. In the test data, 26 observations are in the low crime category; the model predicts 17 of them as low and 9 as med_low (about 65% correct). Of the 24 observations in the med_low category, 13 are predicted correctly (about 54%), with 5 predicted as low and 6 as med_high. Of the 24 med_high observations, 15 are predicted correctly (about 63%) and 9 as med_low. All 28 high crime observations are predicted as high (100% correct). So the best predictions belong to the two ends of the crime scale, i.e., low and high.
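The per-class prediction accuracy can also be read off the cross-tabulation programmatically (a minimal sketch, reusing the objects created above):
conf <- table(correct = correct_classes, predicted = lda.pred$class)
round(diag(conf) / rowSums(conf), 2)  # share of correct predictions per true class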
Reload and standardize the Boston dataset
library(MASS)
data("Boston")
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
class(boston_scaled)
## [1] "matrix"
boston_scaled <- as.data.frame(boston_scaled)
Calculate the Euclidean and Manhattan distances between the observations
dist_eu <- dist(boston_scaled)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
dist_man <- dist(boston_scaled, method = 'manhattan')
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.2662 8.4832 12.6090 13.5488 17.7568 48.8618
Interpretation: I calculated the distances (dissimilarities) between the observations with two methods: Euclidean distance (EuD) and Manhattan distance (MD). EuD is the length of the straight line segment connecting two observations; here the median and mean EuD are 4.82 and 4.91, respectively. The Manhattan method instead sums the absolute differences of the coordinates. In this dataset, the median and mean MD are 12.61 and 13.55, respectively.
K-means clustering and optimal number of clusters
set.seed(123)
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
Visualize the K-means clustering
library(ggplot2)
qplot(x = 1:k_max, y = twcss, geom = 'line')
Interpretation: I used the total within-cluster sum of squares (WCSS) to determine the optimal number of clusters. In the figure we look for an ‘elbow’, the point where the WCSS stops dropping sharply as the number of clusters increases. K = 2 seems a good choice here.
Run K-means clustering with K=2 and its visualization
km <-kmeans(boston_scaled, centers = 2)
pairs(boston_scaled, col = km$cluster)
Interpretation: Based on the figure, the variable ‘tax’ in combination with most of the other variables distinguishes the two clusters fairly well. Also, ‘nox’ & ‘dis’ and ‘dis’ & ‘age’ separate the clusters better than most other variable combinations.
Bonus question
library(MASS)
data("Boston")
boston_scaled <- scale(Boston)
class(boston_scaled)
## [1] "matrix"
boston_scaled <- as.data.frame(boston_scaled)
km1 <- kmeans(boston_scaled, centers = 4)
pairs(boston_scaled, col = km1$cluster)
boston_scaled$k_cluster <- as.factor(km1$cluster)
lda.fit.1 <- lda(k_cluster ~ ., data = boston_scaled)
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
classes <- as.numeric(boston_scaled$k_cluster)
plot(lda.fit.1, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit.1, myscale = 1)
Interpretation: I performed K-means clustering on the standardized dataset with the number of clusters set to 4. 202 observations belonged to cluster 1, 123 to cluster 2, 128 to cluster 3 and 53 to cluster 4. Cluster 4 contains a much smaller share of the observations than the other three clusters, which could indicate that 4 clusters is not the ideal solution. In the biplot, the linear combination of the ‘indus’, ‘rad’, ‘black’ and ‘zn’ variables appears to be the most influential separator of the clusters.
Super-bonus question
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
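The first of the two 3-D plots discussed below presumably came from the same call without the color argument; a minimal sketch of that plain version:
# 3-D scatter of the LDA scores without class colouring (illustrative)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type = 'scatter3d', mode = 'markers')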
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)
Interpretation: The two 3-D plots show the data points in similar positions. In the second plot, where I defined the color argument, four colors distinguish the data points and indicate which crime category (low, med_low, med_high, high) each data point belongs to.
***
Visualizing data
library(dplyr)
library(GGally)
##
## Attaching package: 'GGally'
## The following object is masked from 'package:dplyr':
##
## nasa
human_ <- read.csv("~/IODS-project/Data/human.csv", header=TRUE, sep=",")
human_ <- subset(human_, select = -c(X))
human_$Edu2.FM <- as.numeric(human_$Edu2.FM)
human_$Labo.FM <- as.numeric(human_$Labo.FM)
human_$Edu.Exp <- as.numeric(human_$Edu.Exp)
human_$Life.Exp <- as.numeric(human_$Life.Exp)
human_$GNI <- as.numeric(human_$GNI)
human_$Mat.Mor <- as.numeric(human_$Mat.Mor)
human_$Ado.Birth <- as.numeric(human_$Ado.Birth)
human_$Parli.F <- as.numeric(human_$Parli.F)
summary(human_)
## Edu2.FM Labo.FM Edu.Exp Life.Exp
## Min. :0.1717 Min. :0.1857 Min. : 5.40 Min. :49.00
## 1st Qu.:0.7264 1st Qu.:0.5984 1st Qu.:11.25 1st Qu.:66.30
## Median :0.9375 Median :0.7535 Median :13.50 Median :74.20
## Mean :0.8529 Mean :0.7074 Mean :13.18 Mean :71.65
## 3rd Qu.:0.9968 3rd Qu.:0.8535 3rd Qu.:15.20 3rd Qu.:77.25
## Max. :1.4967 Max. :1.0380 Max. :20.20 Max. :83.50
## GNI Mat.Mor Ado.Birth Parli.F
## Min. : 1.0 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 39.5 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 78.0 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 78.0 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.:116.5 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :155.0 Max. :1100.0 Max. :204.80 Max. :57.50
library(corrplot)
## corrplot 0.84 loaded
ggpairs(human_)
cor(human_) %>% corrplot
Interpretation: In the variables’ summary we can see the 6 measures (min, Q1, median, mean, Q3, max) for the 8 variables of the data. In the pairs plot we can see the distributions of the variables as well as the correlation between each pair of them. For example, expected years of education has a relatively normal distribution whereas maternal mortality is skewed to the right. Regarding the correlation coefficients, the strongest correlation (correlation coefficient = -0.857) is between maternal mortality and life expectancy, a negative correlation, i.e., the lower the maternal mortality, the higher the life expectancy. The strongest positive correlation (correlation coefficient = 0.789) is between expected years of education and life expectancy. These outcomes are also confirmed in the correlation plot.
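The correlation coefficients quoted above can also be inspected numerically (a minimal sketch):
round(cor(human_), 2)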
Principal component analysis (PCA) on unstandardized data and its biplots
pca <- prcomp(human_)
biplot(pca, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped
s <- summary(pca)
s
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6
## Standard deviation 214.2937 44.75162 26.34667 11.4791 4.06656 1.60664
## Proportion of Variance 0.9416 0.04106 0.01423 0.0027 0.00034 0.00005
## Cumulative Proportion 0.9416 0.98267 0.99690 0.9996 0.99995 1.00000
## PC7 PC8
## Standard deviation 0.1905 0.1587
## Proportion of Variance 0.0000 0.0000
## Cumulative Proportion 1.0000 1.0000
pca_pr <- round(100*s$importance[2,], digits = 1)
pca_pr
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## 94.2 4.1 1.4 0.3 0.0 0.0 0.0 0.0
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
biplot(pca, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped
Principal component analysis (PCA) on the standardized data
human_std <- scale(human_)
pca_std <- prcomp(human_std)
biplot(pca_std, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))
s1 <- summary(pca_std)
s1
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 1.9659 1.1387 0.9896 0.8662 0.69949 0.54002 0.46700
## Proportion of Variance 0.4831 0.1621 0.1224 0.0938 0.06116 0.03645 0.02726
## Cumulative Proportion 0.4831 0.6452 0.7676 0.8614 0.92254 0.95899 0.98625
## PC8
## Standard deviation 0.33165
## Proportion of Variance 0.01375
## Cumulative Proportion 1.00000
pca_pr1 <- round(100*s1$importance[2,], digits = 1)
pca_pr1
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## 48.3 16.2 12.2 9.4 6.1 3.6 2.7 1.4
pc_lab1 <- paste0(names(pca_pr1), " (", pca_pr1, "%)")
biplot(pca_std, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab1[1], ylab = pc_lab1[2])
Interpretation: PCA on the unstandardized data yields 8 principal components, where the first PC explains by far the largest share of the variance (94.2%). The share of variance explained decreases for the subsequent PCs, with PC7 and PC8 explaining essentially none of the variance. The biplot of the PCA on the unstandardized data shows that the observations are not well separated along the first two components, and the contributions of the variables to the PCs and their correlations with each other are hard to read. We can only see that maternal mortality and GNI have a weak correlation (large angle between their arrows) and that, based on the directions of the arrows, maternal mortality contributes mainly to PC1 and GNI to PC2.
Interpretations of the first two principal component dimensions after standardization
PCA on the standardized data yields much better and clearer results, because the variables are measured on different scales and in different units, and standardization removes this problem. The importance of the PCs in the standardized PCA makes more sense (PC1 explains 48.3% and PC2 16.2% of the variance). No PC has zero explained variance, although the last PCs explain only small proportions. The biplot shows the variables, their correlations and their relations to the PCs more clearly. For example, there is a strong positive correlation between maternal mortality and adolescent birth rate (small angle between their arrows), and both variables contribute mainly to PC1 (direction of their arrows). There is also a weaker positive correlation between the percentage of females in parliament and the female labour force participation rate, and both of these contribute mainly to PC2.
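The contributions of the variables to the first two standardized components can also be read from the loadings (a minimal sketch using the pca_std object fitted above):
round(pca_std$rotation[, 1:2], 2)  # loadings of each variable on PC1 and PC2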
Load and explore the tea dataset
library(FactoMineR)
data("tea")
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
dim(tea)
## [1] 300 36
summary(tea)
## breakfast tea.time evening lunch
## breakfast :144 Not.tea time:131 evening :103 lunch : 44
## Not.breakfast:156 tea time :169 Not.evening:197 Not.lunch:256
##
##
##
##
##
## dinner always home work
## dinner : 21 always :103 home :291 Not.work:213
## Not.dinner:279 Not.always:197 Not.home: 9 work : 87
##
##
##
##
##
## tearoom friends resto pub
## Not.tearoom:242 friends :196 Not.resto:221 Not.pub:237
## tearoom : 58 Not.friends:104 resto : 79 pub : 63
##
##
##
##
##
## Tea How sugar how
## black : 74 alone:195 No.sugar:155 tea bag :170
## Earl Grey:193 lemon: 33 sugar :145 tea bag+unpackaged: 94
## green : 33 milk : 63 unpackaged : 36
## other: 9
##
##
##
## where price age sex
## chain store :192 p_branded : 95 Min. :15.00 F:178
## chain store+tea shop: 78 p_cheap : 7 1st Qu.:23.00 M:122
## tea shop : 30 p_private label: 21 Median :32.00
## p_unknown : 12 Mean :37.05
## p_upscale : 53 3rd Qu.:48.00
## p_variable :112 Max. :90.00
##
## SPC Sport age_Q frequency
## employee :59 Not.sportsman:121 15-24:92 1/day : 95
## middle :40 sportsman :179 25-34:69 1 to 2/week: 44
## non-worker :64 35-44:40 +2/day :127
## other worker:20 45-59:61 3 to 6/week: 34
## senior :35 +60 :38
## student :70
## workman :12
## escape.exoticism spirituality healthy
## escape-exoticism :142 Not.spirituality:206 healthy :210
## Not.escape-exoticism:158 spirituality : 94 Not.healthy: 90
##
##
##
##
##
## diuretic friendliness iron.absorption
## diuretic :174 friendliness :242 iron absorption : 31
## Not.diuretic:126 Not.friendliness: 58 Not.iron absorption:269
##
##
##
##
##
## feminine sophisticated slimming
## feminine :129 Not.sophisticated: 85 No.slimming:255
## Not.feminine:171 sophisticated :215 slimming : 45
##
##
##
##
##
## exciting relaxing effect.on.health
## exciting :116 No.relaxing:113 effect on health : 66
## No.exciting:184 relaxing :187 No.effect on health:234
##
##
##
##
##
Interpretation: The tea data includes answers to a questionnaire on tea consumption. It has 300 observations and 36 variables. 300 individuals were asked how they drink tea (18 questions), what their perception of the product is (12 questions) and some personal details (4 questions). Except for age, all the variables are categorical; for age, the dataset has both a continuous and a categorical version. For example, 74 individuals drink black tea, 193 drink Earl Grey and 33 drink green tea. Participants are 15-90 years old with a mean age of 37 years, and most of them belong to the 15-24 years age group.
Keeping some columns of the tea data and visualizing it
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- select(tea, one_of(keep_columns))
summary(tea_time)
## Tea How how sugar
## black : 74 alone:195 tea bag :170 No.sugar:155
## Earl Grey:193 lemon: 33 tea bag+unpackaged: 94 sugar :145
## green : 33 milk : 63 unpackaged : 36
## other: 9
## where lunch
## chain store :192 lunch : 44
## chain store+tea shop: 78 Not.lunch:256
## tea shop : 30
##
str(tea_time)
## 'data.frame': 300 obs. of 6 variables:
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
dim(tea_time)
## [1] 300 6
library(ggplot2)
library(dplyr)
library(tidyr)
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped
Interpretation: Now the tea_time data has 300 observations and only 6 variables. From the plots we can see that: 1) most of the participants use tea bags; 2) most of the participants drink their tea plain, with nothing added; 3) most of the participants do not drink tea at lunch; 4) there is only a very small difference between the number of people who add sugar and those who do not; 5) most of the participants choose Earl Grey; 6) most of the participants buy their tea from chain stores.
Multiple Correspondence Analysis on the tea data
mca <- MCA(tea_time, graph = FALSE)
summary(mca)
##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
## Variance 0.279 0.261 0.219 0.189 0.177 0.156
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953
## Dim.7 Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.144 0.141 0.117 0.087 0.062
## % of var. 7.841 7.705 6.392 4.724 3.385
## Cumulative % of var. 77.794 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898
## cos2 v.test Dim.3 ctr cos2 v.test
## black 0.003 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 0.027 2.867 | 0.433 9.160 0.338 10.053 |
## green 0.107 -5.669 | -0.108 0.098 0.001 -0.659 |
## alone 0.127 -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 0.035 3.226 | 1.329 14.771 0.218 8.081 |
## milk 0.020 2.422 | 0.013 0.003 0.000 0.116 |
## other 0.102 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag 0.161 -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 0.478 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged 0.141 -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
plot(mca, invisible=c("ind"), habillage = "quali")
Interpretation: The eigenvalues show the proportion of the variance in the data explained by each dimension. Here the first dimension explains about 15% of the variance, the second about 14%, and so on for the remaining 9 dimensions. Next we get the coordinates, contributions and squared cosines of the first 10 individuals on the first three dimensions. The categories part gives the same information for the first 10 variable categories; the categories with the largest contribution values contribute the most to the definition of the dimensions, and those contributing most to Dim.1 and Dim.2 are the most important for explaining the variability in the data. The categorical variables part shows the squared correlation (eta2) between each variable and the dimensions. The biplot of the variable categories shows that the categories unpackaged and tea shop contribute strongly to the positive pole of the first dimension, while tea bag and chain store contribute to its negative pole, and so on. Based on the distances between the variable categories in the biplot, we can say, for example, that tea bag and chain store are close and thus more similar to each other than tea bag and milk, while unpackaged and tea shop are similar to each other but different from all the other categories.
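The categories that contribute most to the first dimension can also be listed directly from the MCA object (a minimal sketch):
# category contributions (in %) to Dim.1 and Dim.2, sorted by Dim.1
head(round(mca$var$contrib[order(mca$var$contrib[, 1], decreasing = TRUE), 1:2], 1))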